Traffic sign detection algorithm based on improved attention mechanism
Xinyu ZHANG, Sheng DING, Zhipei YANG
Journal of Computer Applications    2022, 42 (8): 2378-2385.   DOI: 10.11772/j.issn.1001-9081.2021061005

In some scenes, the low resolution, occlusion, and other environmental factors affecting traffic signs lead to missed and false detections in object detection tasks. Therefore, a traffic sign detection algorithm based on an improved attention mechanism was proposed. First, to address the low image resolution caused by damage, lighting, and other environmental impacts on traffic signs, which limits the network's extraction of image feature information, an attention module was added to the backbone network to enhance the key features of the object area. Second, since the local features between adjacent channels of the feature map are correlated due to overlapping receptive fields, a one-dimensional convolution of size k was used to replace the fully connected layer in the channel attention module, aggregating information across channels while reducing the number of additional parameters. Finally, a receptive field module was introduced into the medium- and small-scale feature layers of Path Aggregation Network (PANet) to enlarge the receptive field of the feature map, fuse the context information of the object area, and improve the network's ability to detect traffic signs. Experimental results on the CSUST Chinese Traffic Sign Detection Benchmark (CCTSDB) dataset show that the proposed improved You Only Look Once v4 (YOLOv4) algorithm introduces only a small number of additional parameters, and its detection speed differs little from that of the original algorithm. Its mean Average Precision (mAP) reached 96.88%, an increase of 1.48 percentage points; compared with the lightweight network YOLOv5s, although its single-frame detection speed is 10 ms slower, its mAP is 3.40 percentage points higher and its speed reaches 40 frame/s, fully meeting the real-time requirements of object detection.
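The channel attention described in the abstract, where a size-k one-dimensional convolution over per-channel descriptors replaces the fully connected layer, can be sketched as follows. This is a minimal illustrative NumPy version, not the authors' implementation; the function name and the uniform placeholder kernel are assumptions (in a trained network the kernel weights would be learned):

```python
import numpy as np

def channel_attention_1d(feature_map, k=3):
    """Channel attention using a size-k 1D convolution across channels
    instead of a fully connected layer. feature_map: array (C, H, W)."""
    # Global average pooling: one descriptor per channel
    desc = feature_map.mean(axis=(1, 2))              # shape (C,)
    # Size-k 1D convolution over the channel dimension: only k extra
    # parameters, capturing local cross-channel interaction
    kernel = np.full(k, 1.0 / k)                      # placeholder weights
    padded = np.pad(desc, k // 2, mode="edge")
    conv = np.convolve(padded, kernel, mode="valid")  # shape (C,)
    # Sigmoid gate, then reweight each channel of the feature map
    weights = 1.0 / (1.0 + np.exp(-conv))
    return feature_map * weights[:, None, None]
```

Because the convolution slides over neighbouring channel descriptors, it exploits exactly the adjacent-channel correlation the abstract mentions, at a cost of k parameters rather than the C×C of a fully connected layer.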

Table and Figures | Reference | Related Articles | Metrics
Anchor-free remote sensing image detection method for dense objects with rotation
Zhipei YANG, Sheng DING, Li ZHANG, Xinyu ZHANG
Journal of Computer Applications    2022, 42 (6): 1965-1971.   DOI: 10.11772/j.issn.1001-9081.2021060890

Aiming at the problems of high missed detection rate and inaccurate classification of dense objects in deep-learning-based remote sensing image detection methods, an anchor-free deep-learning detection method for dense rotated objects was proposed. Firstly, CenterNet was used as the baseline network: features were extracted through the backbone network, and the original detector structure was improved by adding an angle regression branch to regress the object angle. Then, a feature enhancement module based on asymmetric convolution was proposed; the feature map extracted by the backbone network was passed through this module to enhance the rotation-invariant features of the object, reduce the influence of object rotation and flipping, and improve the regression precision of the object's center point and size. When HourGlass-101 was used as the backbone network, compared with Rotation Region Proposal Network (RRPN), the proposed method achieved a 7.80 percentage point improvement in Mean Average Precision (mAP) and a 7.50 improvement in Frames Per Second (FPS) on the DOTA dataset. On the self-built dataset Ship3, the proposed method achieved an 8.68 percentage point improvement in mAP and a 6.5 improvement in FPS. The results show that the proposed method achieves a balance between detection precision and speed.
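The added angle regression branch means each box is described by five values (cx, cy, w, h, θ) instead of the usual four. A small helper, under an assumed parameterization (angle in radians about the box center; not taken from the paper), shows how such a rotated box decodes into corner points:

```python
import math

def rotated_box_corners(cx, cy, w, h, theta):
    """Decode a rotated box (center, size, angle in radians) into
    its four corner points."""
    c, s = math.cos(theta), math.sin(theta)
    # Corner offsets in the box's local (unrotated) frame
    local = [(-w / 2, -h / 2), (w / 2, -h / 2),
             (w / 2, h / 2), (-w / 2, h / 2)]
    # Rotate each offset by theta, then translate to the center
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in local]
```

With θ = 0 this reduces to an ordinary axis-aligned box, which is why the angle branch can be added to an anchor-free detector like CenterNet without changing the center/size regression targets.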
